Fairness Through Computationally-Bounded Awareness
We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike earlier work, we do not assume that the entire metric is known to the learning algorithm; instead, the learner can query this *arbitrary* metric a bounded number of times. We propose a new notion of fairness called *metric multifairness* and show how to achieve this notion in our setting. Metric multifairness is parameterized by a similarity metric d on pairs of individuals to classify and a rich collection C of (possibly overlapping) "comparison sets" over pairs of individuals. At a high level, metric multifairness guarantees that *similar subpopulations are treated similarly*, as long as these subpopulations are identified within the class C.
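To make the high-level guarantee concrete, here is a minimal sketch of one plausible reading of the constraint: for each comparison set S in C, the average classification gap over pairs in S should not exceed the average metric distance by more than a slack parameter. The function name, the slack parameter `tau`, and the exact inequality form are illustrative assumptions, not the paper's precise definition.

```python
def multifairness_violations(h, d, comparison_sets, tau=0.1):
    """Return the names of comparison sets that violate the (sketched)
    metric multifairness constraint:

        E_{(x,y) in S}[|h(x) - h(y)|]  <=  E_{(x,y) in S}[d(x, y)] + tau

    h: classifier mapping an individual to a score in [0, 1]
    d: similarity metric on pairs of individuals (assumed queryable)
    comparison_sets: dict mapping a set name to a list of (x, y) pairs
    tau: slack parameter (an illustrative assumption, not from the paper)
    """
    violations = []
    for name, pairs in comparison_sets.items():
        # Average classification gap over the comparison set.
        avg_gap = sum(abs(h(x) - h(y)) for x, y in pairs) / len(pairs)
        # Average metric distance over the same pairs.
        avg_dist = sum(d(x, y) for x, y in pairs) / len(pairs)
        if avg_gap > avg_dist + tau:
            violations.append(name)
    return violations


# Toy usage: individuals are numbers in [0, 1], the metric is |x - y|,
# and the classifier is a hard threshold at 0.5. Pairs straddling the
# threshold are metrically close but classified very differently, so
# that comparison set is flagged.
h = lambda x: 1.0 if x >= 0.5 else 0.0
d = lambda x, y: abs(x - y)
sets = {"near_threshold": [(0.49, 0.51), (0.48, 0.52)],
        "far_apart": [(0.0, 1.0)]}
print(multifairness_violations(h, d, sets))
```

Note that the constraint is enforced only on average over each set in C, which is exactly the relaxation that lets a learner with bounded metric queries satisfy it.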
Reviews: Fairness Through Computationally-Bounded Awareness
The paper builds on the influential "Fairness Through Awareness" paper of Dwork et al., which provided a framework for learning non-discriminatory classifiers by requiring the classifier to treat similar individuals similarly. This similarity was defined by a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand. This paper extends the results to the more realistic scenario in which the entire metric is not known to the learner, while relaxing the requirement that all pairs of similar individuals be treated similarly. The paper provides both learning algorithms and hardness results for this model. Fairness in machine learning is still a very young field, with many competing models and definitions, and more questions than answers.
Michael Kim, Omer Reingold, Guy Rothblum

Papers published at the Neural Information Processing Systems Conference.